
    Neural activity with spatial and temporal correlations as a basis to simulate fMRI data

    In the development of data analysis techniques, simulation studies are steadily gaining interest. The largest challenge in setting up a simulation study is to create realistic data. This is especially true for generating fMRI data, since there is no consensus about the biological and physical relationships underlying the BOLD signal. Most existing simulation studies start from empirically acquired resting-state data to obtain realistic noise and add known activity (e.g., Bianciardi et al., 2004). However, since the noise in such data cannot be controlled, these data are hard to use in simulation studies. Others use the Bloch equations to simulate fMRI data (e.g., Drobnjak et al., 2006). Even though this yields realistic data, the process is very slow and involves many calculations that may be unnecessary in a simulation study. We propose a new basis for generating fMRI data, starting from a neural activation map in which the neural activity is correlated between different locations, both spatially and temporally. A biologically inspired model can then be used to simulate the BOLD response
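    A minimal base-R sketch of this idea (not the authors' implementation; the grid size, covariance range, AR coefficient and HRF parameters are all illustrative assumptions): neural activity is drawn with an exponential spatial covariance and an AR(1) temporal structure, then convolved with a double-gamma HRF to obtain a BOLD-like signal.

        ## Sketch only: spatially and temporally correlated neural activity -> BOLD-like signal.
        ## All parameter values below are illustrative assumptions.
        set.seed(1)
        n_vox  <- 25      # voxels on a 5 x 5 grid
        n_scan <- 200     # time points
        TR     <- 2       # repetition time in seconds

        ## Spatial correlation: exponential decay with inter-voxel distance
        coords  <- expand.grid(x = 1:5, y = 1:5)
        Sigma_s <- exp(-as.matrix(dist(coords)) / 2)   # spatial covariance, range of 2 voxels
        U       <- chol(Sigma_s)

        ## Temporal correlation: AR(1) dynamics driven by spatially correlated innovations
        rho    <- 0.4
        eps    <- matrix(rnorm(n_scan * n_vox), n_scan, n_vox) %*% U
        neural <- matrix(0, n_scan, n_vox)
        for (t in 2:n_scan) neural[t, ] <- rho * neural[t - 1, ] + eps[t, ]

        ## Double-gamma HRF sampled at the TR (canonical shape, illustrative parameters)
        t_hrf <- seq(0, 30, by = TR)
        hrf   <- dgamma(t_hrf, shape = 6, scale = 1) - dgamma(t_hrf, shape = 16, scale = 1) / 6

        ## Convolve each voxel's neural activity with the HRF
        bold <- apply(neural, 2, function(s) convolve(s, rev(hrf), type = "open")[1:n_scan])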

    neuRosim: an R package for simulation of fMRI magnitude data with realistic noise

    Statistical analysis techniques for highly complex structured data such as fMRI data should be thoroughly validated. In this process, knowing the ground truth is essential. Unfortunately, establishing the ground truth of fMRI data is only possible with highly invasive procedures (i.e. intracranial EEG). Therefore, generating the data artificially is often the only viable solution. However, there is currently no consensus among researchers on how to simulate fMRI data. Research groups develop their own methods and use only in-house software routines. A general validation of these methods is lacking, probably due to the nonexistence of well-documented and freely available software

    Citizen surveillance for environmental monitoring: combining the efforts of citizen science and crowdsourcing in a quantitative data framework

    Citizen science and crowdsourcing have been emerging as methods to collect data for surveillance and/or monitoring activities. They can be gathered under the overarching term citizen surveillance. The discipline, however, still struggles to be widely accepted in the scientific community, mainly because these activities are not embedded in a quantitative framework. This results in an ongoing discussion on how to analyze these data and make useful inferences from them. Considering the data collection process, we illustrate how citizen surveillance can be classified according to the nature of the underlying observation process, measured in two dimensions: the degree of observer reporting intention and the control over observer detection effort. By classifying the observation process along these dimensions we distinguish between crowdsourcing, unstructured citizen science and structured citizen science. This classification helps determine the data processing and statistical treatment required for making inference. Using our framework, it is apparent that published studies are overwhelmingly associated with structured citizen science, for which well-developed statistical methods exist. In contrast, methods for making useful inference from purely crowdsourced data remain under development, and the challenge of accounting for the unknown observation process is considerable. Our quantitative framework for citizen surveillance calls for an integration of citizen science and crowdsourcing and provides a way forward to solve the statistical challenges inherent to citizen-sourced data
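    As a rough illustration of the two-dimensional classification (the binary high/low coding of the dimensions below is an assumption made for illustration, not the authors' formal operationalisation), the three categories can be expressed as a simple decision rule in R:

        ## Hypothetical coding of the two dimensions as high/low; an illustrative
        ## assumption, not the authors' formal definitions.
        classify_surveillance <- function(reporting_intention, detection_control) {
          # reporting_intention: degree to which observers intend to report the target
          # detection_control:   degree of control over the observers' detection effort
          if (reporting_intention == "low") return("crowdsourcing")
          if (detection_control == "low")   return("unstructured citizen science")
          "structured citizen science"
        }

        classify_surveillance("low",  "low")    # "crowdsourcing"
        classify_surveillance("high", "low")    # "unstructured citizen science"
        classify_surveillance("high", "high")   # "structured citizen science"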

    Simulating fMRI data: the R package neuRosim

    Functional magnetic resonance imaging (fMRI) is a powerful imaging tool used to localize BOLD activation in time and, in particular, in space. With about 3300 publications in 2010 and a similar number expected in 2011, the technique is also very popular in the neuroimaging community. Unfortunately, the ground truth of data acquired with fMRI is unknown. This is a major problem because locating activity in the data requires complex analysis procedures that need to be validated to ensure that the analysis techniques work properly. Validation is only possible if the ground truth is known. As a solution, fMRI data are generated artificially. However, simulation studies remain a minority in the fMRI literature (only 100 publications in 2010). Moreover, there is currently no consensus on how to simulate fMRI data, nor has any attempt been made to converge on and validate the existing simulation methods. Generally, simulation studies are conducted using ad-hoc methods and in-house software routines. neuRosim is an R package (http://www.r-project.org) that aims to serve as a general, standardized software platform to simulate fMRI data. Currently, the package gathers the functionalities of existing simulation studies and extends them with more biophysically plausible models. The main focus lies on the inclusion of several noise sources (for example system noise, temporally and spatially correlated noise, physiological noise, …). neuRosim can be downloaded from CRAN (http://cran.r-project.org) and is released under a GPL licence, meaning that it is completely open source and can be freely used on almost all platforms (Windows, Mac and Unix). The data generation in neuRosim is fairly fast and, depending on the dimensions of the dataset, can easily be computed on a standard desktop within a few minutes. Therefore, the simulation process can be smoothly incorporated into large simulation studies. During the presentation, we will stress the importance of validated simulation research and demonstrate how neuRosim can contribute by comparing the differences between the implemented simulation methods. Further, we will show examples that demonstrate the functionalities of the package and briefly discuss our plans for upcoming updates
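    A rough base-R sketch of the kind of noise mixture listed above (plain R, not the neuRosim API; all weights, frequencies and the AR coefficient are illustrative assumptions) for a single-voxel time series:

        ## Sketch only; this is not the neuRosim API, and every value below is an
        ## illustrative assumption.
        set.seed(42)
        n  <- 200                              # scans
        TR <- 2                                # repetition time in seconds

        system_noise <- rnorm(n, sd = 1)                              # white system noise
        ar_noise     <- as.numeric(arima.sim(list(ar = 0.3), n = n))  # temporally correlated noise
        drift        <- 0.5 * sin(2 * pi * (1:n) * TR / 120)          # low-frequency drift (120 s period)
        physio       <- 0.3 * sin(2 * pi * 0.3 * (1:n) * TR)          # crude respiratory proxy (~0.3 Hz)

        noise <- system_noise + ar_noise + drift + physio             # combined noise term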

    Simulation of fMRI data: a statistical approach


    Adaptive smoothing as inference strategy: More specificity for unequally sized or neighboring regions

    Although spatial smoothing of fMRI data can serve multiple purposes, increasing the sensitivity of activation detection is probably its greatest benefit. However, this increased detection power comes with a loss of specificity when non-adaptive smoothing (i.e. the standard in most software packages) is used. Simulation studies and analyses of experimental data were performed using the R packages neuRosim and fmri. In these studies, we systematically investigated the effect of spatial smoothing on the power and the number of false positives in two cases that are often encountered in fMRI research: (1) single-condition activation detection for regions that differ in size, and (2) multiple-condition activation detection for neighbouring regions. Our results demonstrate that adaptive smoothing is superior in both cases because fewer false positives are introduced by the spatial smoothing process compared to standard Gaussian smoothing or FDR inference on unsmoothed data
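    A small base-R illustration of the specificity problem described above (not the code from the study; region size, kernel width and threshold are illustrative assumptions): a noise-free effect map with a small active region is smoothed with a non-adaptive Gaussian kernel, which spreads signal into genuinely inactive neighbouring voxels.

        ## Illustration only, not the study's code; all values below are assumptions.
        n_vox <- 60
        truth <- rep(0, n_vox)
        truth[28:32] <- 1                          # small truly active region (5 voxels)
        signal <- 3 * truth                        # noise-free effect map for clarity

        ## Non-adaptive Gaussian kernel with an FWHM of about 3 voxels
        fwhm  <- 3
        sigma <- fwhm / (2 * sqrt(2 * log(2)))
        k     <- dnorm(-7:7, sd = sigma)
        k     <- k / sum(k)
        smoothed <- as.numeric(stats::filter(signal, k, sides = 2))

        ## Signal leaks into inactive voxels adjacent to the true region
        round(smoothed[25:35], 2)
        sum(smoothed > 0.5 & truth == 0, na.rm = TRUE)   # inactive voxels now carrying signal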